# Generative Image Understanding
## Git Large Textvqa (microsoft)

GIT is a vision-language model built on a Transformer decoder conditioned on both CLIP image tokens and text tokens; this checkpoint is fine-tuned for TextVQA, i.e. answering questions about text appearing in images.

- License: MIT
- Task: Image-to-Text
- Library: Transformers (supports multiple languages)
## Git Large Vqav2 (microsoft)

GIT is a Transformer decoder conditioned on CLIP image tokens and text tokens, trained on large-scale image-text pairs; this checkpoint is suited to tasks such as visual question answering.

- License: MIT
- Task: Image-to-Text
- Library: Transformers (supports multiple languages)
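Both checkpoints can be loaded through the Transformers `AutoProcessor` / `AutoModelForCausalLM` classes. The sketch below shows minimal visual-question-answering inference; the Hub repo id `microsoft/git-large-vqav2` and the sample image URL are assumptions for illustration, so substitute your own.

```python
# Hedged sketch: VQA inference with a GIT checkpoint via Transformers.
# The checkpoint id and image URL below are illustrative assumptions.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

checkpoint = "microsoft/git-large-vqav2"  # assumed Hub repo id
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Any RGB image works; this COCO sample is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# GIT takes the question, prefixed with the CLS token, as the text prompt.
question = "what is in the picture?"
input_ids = processor(text=question, add_special_tokens=False).input_ids
input_ids = torch.tensor([[processor.tokenizer.cls_token_id] + input_ids])

generated_ids = model.generate(
    pixel_values=pixel_values, input_ids=input_ids, max_length=50
)
answer = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(answer)
```

The decoded string contains the prompt followed by the generated answer; for the TextVQA checkpoint, swap in its repo id and ask about text visible in the image.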